Lazy learning

(Not to be confused with the lazy learning regime of neural network training; see neural tangent kernel.)

In machine learning, lazy learning is a learning method in which generalization of the training data is, in theory, delayed until a query is made to the system, as opposed to eager learning, where the system tries to generalize the training data before receiving queries.[1]

The primary motivation for employing lazy learning, as in the k-nearest neighbors algorithm used by online recommendation systems ("people who viewed/purchased/listened to this movie/item/tune also ..."), is that the data set is continuously updated with new entries (e.g., new items for sale at Amazon, new movies to view at Netflix, new clips at YouTube, new music at Spotify or Pandora). Because of these continuous updates, the "training data" would become obsolete in a relatively short time, especially in areas like books and movies, where new best-sellers or hit movies and music are released continuously. One therefore cannot really speak of a "training phase".

Lazy classifiers are most useful for large, continuously changing datasets in which only a few attributes are commonly queried. Even if a large set of attributes exists (for example, books have a year of publication, author(s), publisher, title, edition, ISBN, selling price, etc.), recommendation queries rely on far fewer attributes, such as purchase or viewing co-occurrence data and user ratings of items purchased or viewed.[2]
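The contrast with eager learning can be sketched in code. Below is a minimal, illustrative lazy k-nearest-neighbors classifier (the class name and interface are hypothetical, not from any particular library): "training" merely stores examples, new entries can be absorbed at any time without retraining, and all generalization is deferred to query time.

```python
from collections import Counter
import math

class LazyKNN:
    """Minimal lazy learner: storing data is the entire 'training' step;
    generalization happens only when a query arrives."""

    def __init__(self, k=3):
        self.k = k
        self.points = []  # stored feature vectors
        self.labels = []  # stored class labels

    def add(self, point, label):
        # New entries (new items, new ratings) are absorbed instantly;
        # there is no model to rebuild, hence no "training phase".
        self.points.append(point)
        self.labels.append(label)

    def predict(self, query):
        # All work is deferred to query time: compute distances to the
        # stored examples and take a majority vote among the k nearest.
        nearest = sorted(
            range(len(self.points)),
            key=lambda i: math.dist(query, self.points[i]),
        )[: self.k]
        votes = Counter(self.labels[i] for i in nearest)
        return votes.most_common(1)[0][0]

knn = LazyKNN(k=3)
for point, label in [((0, 0), "a"), ((0, 1), "a"), ((1, 0), "a"),
                     ((5, 5), "b"), ((5, 6), "b"), ((6, 5), "b")]:
    knn.add(point, label)
print(knn.predict((0.5, 0.5)))  # → a
```

The trade-off this illustrates is the usual one for lazy methods: updates are free, but each query pays the full cost of scanning the stored data.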

  1. ^ Aha, David (29 June 2013). Lazy Learning (illustrated ed.). Springer Science & Business Media. p. 424. ISBN 978-9401720533. Retrieved 30 September 2021.
  2. ^ Tamrakar, Preeti; Roy, Siddharth Singha; Satapathy, Biswajit; Ibrahim, S. P. Syed (2019). Integration of lazy learning associative classification with kNN algorithm. pp. 1–4. doi:10.1109/ViTECoN.2019.8899415. ISBN 978-1-5386-9353-7.
